# RoBERTa architecture
## Polish Reranker Base Mse
sdadas · Apache-2.0 · Text Embedding · Transformers, Other · 16 downloads · 0 likes

A Polish text ranking model trained with Mean Squared Error (MSE) distillation on a training set of text pairs drawn from 1.4 million queries and 10 million documents.
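The listing gives no usage snippet, so here is a minimal sketch of how a reranker of this kind is typically called: as a cross-encoder that scores query-passage pairs. The repository id and the sentence-transformers CrossEncoder interface are assumptions based on the model name, not details confirmed by the listing.

```python
# Minimal sketch: scoring query-passage pairs with a cross-encoder reranker.
# Assumes the model follows the standard sentence-transformers CrossEncoder
# interface; the repository id below is an assumption based on the model name.
from sentence_transformers import CrossEncoder

model = CrossEncoder("sdadas/polish-reranker-base-mse", max_length=512)

query = "Jak dziala silnik spalinowy?"
passages = [
    "Silnik spalinowy zamienia energie chemiczna paliwa na prace mechaniczna.",
    "Przepis na tradycyjny bigos wymaga kapusty i miesa.",
]

# One score per (query, passage) pair; higher means more relevant.
scores = model.predict([(query, p) for p in passages])
ranked = sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.3f}  {passage}")
```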
## Simcse Roberta Large Zh
hellonlp · MIT · Text Embedding · Transformers, Chinese · 179 downloads · 1 like

SimCSE (sup) is a model for Chinese sentence similarity tasks: it encodes sentences into embedding vectors and computes the cosine similarity between them.

## Klue Roberta Base Klue Sts
shangrilar · Text Embedding · 165 downloads · 0 likes

A sentence-transformers model that maps sentences and paragraphs to a 768-dimensional dense vector space, suitable for tasks such as clustering and semantic search.
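Several entries in this list are sentence-transformers models that map text to 768-dimensional vectors. A minimal sketch of the common encode-and-compare pattern, assuming the model is published under the repository id shown (an assumption based on the listing) and loads with the standard SentenceTransformer interface:

```python
# Minimal sketch: sentence embeddings and cosine similarity with sentence-transformers.
# The repository id is an assumption based on the listing; any of the
# sentence-transformers models in this list could be substituted.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("shangrilar/klue-roberta-base-klue-sts")

sentences = [
    "오늘 날씨가 정말 좋네요.",
    "바깥 날씨가 아주 화창합니다.",
    "주식 시장이 크게 하락했습니다.",
]

# Each sentence is mapped to a 768-dimensional dense vector.
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities between all sentences.
similarity = util.cos_sim(embeddings, embeddings)
print(similarity)
```

The same pattern applies to the other sentence-transformers entries below, such as Simcse Indoroberta Base, Klue Sentence Roberta Base Kornlu, and Vietnamese Sbert.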
## Ag Nli Bert Mpnet Base Uncased Sentence Similarity V1
abbasgolestani · Text Embedding · Transformers, Other · 18 downloads · 0 likes

A sentence-transformers model that maps sentences and paragraphs to a 768-dimensional dense vector space, suitable for tasks such as clustering and semantic search.

## Simcse Indoroberta Base
LazarusNLP · Text Embedding · Transformers, Other · 15 downloads · 0 likes

A sentence-transformers model based on IndoRoberta that maps Indonesian sentences and paragraphs to a 768-dimensional vector space, suitable for sentence similarity and semantic search tasks.

## Legal Dutch Roberta Base
joelniklaus · Large Language Model · Transformers · 25 downloads · 2 likes

A RoBERTa-based base model for the Dutch legal domain.
## Legal Xlm Roberta Base
joelniklaus · CC · Large Language Model · Transformers, Supports Multiple Languages · 387 downloads · 3 likes

A multilingual XLM-RoBERTa model pre-trained on legal data, supporting legal text processing in 24 European languages.

## Congen Simcse Model Roberta Base Thai
kornwtp · Apache-2.0 · Text Embedding · Transformers · 86 downloads · 1 like

A Thai sentence similarity model based on the RoBERTa architecture that maps sentences into a 768-dimensional vector space, suitable for tasks such as semantic search.

## Maltberta
MaCoCu · Large Language Model · Other · 26 downloads · 0 likes

MaltBERTa is a large-scale language model pre-trained on Maltese text using the RoBERTa architecture, developed within the MaCoCu project.
## Robertuito Pos
pysentimiento · Sequence Labeling · Transformers, Spanish · 188 downloads · 0 likes

A Spanish/English POS tagging model based on RoBERTuito, optimized for Twitter text.

## Roberta Base Ca V2 Cased Ner
projecte-aina · Apache-2.0 · Sequence Labeling · Transformers, Other · 986 downloads · 0 likes

A Catalan named entity recognition model based on the RoBERTa architecture, fine-tuned on the AnCora-Ca-NER dataset.
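For the sequence labeling entries (NER and POS tagging), the usual way to run inference is the transformers token-classification pipeline. A minimal sketch, with the repository id assumed from the listing; the same pattern applies to the POS tagging models here, such as Robertuito Pos and Roberta Base Coptic Upos.

```python
# Minimal sketch: named entity recognition with a token-classification pipeline.
# The repository id is an assumption based on the listing.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="projecte-aina/roberta-base-ca-v2-cased-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

text = "La Maria treballa a Barcelona per a la Generalitat de Catalunya."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```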
## Xlm Roberta Base Finetuned Panx All
flood · MIT · Large Language Model · Transformers · 15 downloads · 0 likes

A named entity recognition model fine-tuned from xlm-roberta-base on multilingual (PAN-X) data.
## Roberta Base Coptic Upos
KoichiYasuoka · Sequence Labeling · Transformers, Other · 67 downloads · 0 likes

A RoBERTa-based model for Coptic POS tagging and dependency parsing.

## Tibetan Roberta Causal Base
sangjeedondrub · MIT · Large Language Model · Transformers, Other · 156 downloads · 5 likes

A Tibetan pre-trained causal language model based on the RoBERTa architecture, designed primarily for Tibetan text generation tasks.
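Since this entry is a causal language model intended for text generation, the text-generation pipeline is the natural way to try it. A minimal sketch; the repository id and the prompt are assumptions, not details from the listing.

```python
# Minimal sketch: text generation with a causal language model.
# The repository id is an assumption based on the listing; the prompt is a
# placeholder Tibetan prefix.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="sangjeedondrub/tibetan-roberta-causal-base",
)

prompt = "བོད་"  # a short Tibetan prefix to continue
outputs = generator(prompt, max_new_tokens=30, do_sample=True, top_p=0.95)
print(outputs[0]["generated_text"])
```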
## Roberta Base Turkish Uncased
burakaytan · MIT · Large Language Model · Transformers, Other · 57 downloads · 16 likes

A RoBERTa base model pre-trained on a 38 GB Turkish corpus.

## Bsc Bio Es
PlanTL-GOB-ES · Apache-2.0 · Large Language Model · Transformers, Spanish · 162 downloads · 5 likes

A pre-trained language model designed specifically for the Spanish biomedical domain, suitable for clinical NLP tasks.
## Roberta Retrained Ru Covid Papers
Daryaflp · Large Language Model · Transformers · 15 downloads · 0 likes

A Russian model fine-tuned from roberta-retrained_ru_covid on an unspecified dataset, likely intended for processing COVID-19 research papers.
## Robbert V2 Dutch Base
pdelobelle · MIT · Large Language Model · Other · 7,891 downloads · 29 likes

RobBERT is a state-of-the-art Dutch language model based on the RoBERTa architecture, suitable for a wide range of text classification and tagging tasks.
## FERNET News Sk
fav-kky · Large Language Model · Transformers, Other · 26 downloads · 3 likes

A monolingual RoBERTa-base model for Slovak, pre-trained on a 4.5 GB cleaned corpus of Slovak news.

## Takalane Tsn Roberta
jannesg · MIT · Large Language Model · Other · 24 downloads · 0 likes

A masked language model focused on the Tswana language, aiming to improve NLP performance for low-resource South African languages.
## Roberta Base Few Shot K 1024 Finetuned Squad Seed 0
anas-awadalla · MIT · Question Answering System · Transformers · 19 downloads · 0 likes

A question answering model fine-tuned from roberta-base on the SQuAD dataset in a few-shot setting (k = 1024 training examples, seed 0).
## Roberta Zwnj Wnli Mean Tokens
m3hrdadfi · Text Embedding · Transformers · 104 downloads · 0 likes

A Persian sentence embedding model based on a RoBERTa (ZWNJ-aware) architecture that produces sentence-level representations by mean pooling over token embeddings.
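The model name suggests sentence vectors are obtained by averaging token embeddings. A minimal sketch of that mean-pooling pattern with plain transformers; the repository id is an assumption based on the listing.

```python
# Minimal sketch: sentence embeddings via mean pooling over token embeddings,
# using plain transformers (no sentence-transformers dependency).
# The repository id is an assumption based on the listing.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "m3hrdadfi/roberta-zwnj-wnli-mean-tokens"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["این یک جمله آزمایشی است.", "این جمله برای آزمایش است."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq, hidden)

# Average only over real tokens, ignoring padding positions.
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

similarity = torch.nn.functional.cosine_similarity(
    sentence_embeddings[0], sentence_embeddings[1], dim=0
)
print(float(similarity))
```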
## QA For Event Extraction
veronica320 · Question Answering System · Transformers · 21 downloads · 7 likes

Part of the event extraction system from an ACL 2021 paper; based on RoBERTa-large and fine-tuned on the QAMR dataset for zero-shot event extraction.

## Esperberto Small
julien-c · Large Language Model · Other · 1,579 downloads · 7 likes

A RoBERTa-like language model trained on Esperanto, suitable for fill-mask (fill-in-the-blank) tasks.
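Masked language models such as this one (and the other base models in this list) can be queried for fill-in-the-blank predictions via the fill-mask pipeline. A minimal sketch, assuming the repository id below and the standard RoBERTa mask token.

```python
# Minimal sketch: fill-in-the-blank (masked language modeling) with a fill-mask
# pipeline. The repository id is an assumption based on the listing; RoBERTa
# tokenizers use "<mask>" as the mask token.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="julien-c/EsperBERTo-small")

# "The sun <mask> in the east." in Esperanto.
for prediction in fill_mask("La suno <mask> en la oriento."):
    print(prediction["token_str"], round(prediction["score"], 3))
```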
## Roberta Large Squad2
navteca · MIT · Question Answering System · English · 21 downloads · 0 likes

A question answering model built on the roberta-large architecture and trained specifically on the SQuAD 2.0 dataset.
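SQuAD 2.0-style models like this one are typically used through the question-answering pipeline, which extracts an answer span from a context passage. A minimal sketch with an assumed repository id; the same pattern applies to Roberta Base Squad2 further down the list.

```python
# Minimal sketch: extractive question answering with a SQuAD 2.0-style model.
# The repository id is an assumption based on the listing.
from transformers import pipeline

qa = pipeline("question-answering", model="navteca/roberta-large-squad2")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], round(result["score"], 3))
```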
## Klue Sentence Roberta Base Kornlu
bespin-global · Text Embedding · Transformers · 13 downloads · 0 likes

A Korean sentence embedding model based on sentence-transformers that maps sentences and paragraphs into a 768-dimensional vector space, suitable for tasks such as semantic search and clustering.

## Roberta Base Indonesian 522M
cahya · MIT · Large Language Model · Other · 454 downloads · 6 likes

An Indonesian pre-trained model based on the RoBERTa-base architecture, trained on Indonesian Wikipedia data, uncased.

## FERNET News
fav-kky · Large Language Model · Transformers, Other · 17 downloads · 0 likes

FERNET-News is a monolingual Czech RoBERTa-base model, pre-trained on a thoroughly cleaned 20.5 GB corpus of Czech news.

## Vietnamese Sbert
keepitreal · Text Embedding · Transformers · 10.54k downloads · 48 likes

A Vietnamese sentence embedding model based on sentence-transformers that maps text to a 768-dimensional vector space, suitable for semantic search and clustering tasks.
## Icebert
mideind · Large Language Model · Transformers, Other · 1,203 downloads · 3 likes

An Icelandic masked language model with the RoBERTa-base architecture, trained on 16 GB of Icelandic text.

## Roberta Large
klue · Large Language Model · Transformers, Korean · 132.29k downloads · 50 likes

A RoBERTa model pre-trained on Korean, suitable for Korean language understanding tasks.
## Roberta Base Squad2
navteca · MIT · Question Answering System · English · 101 downloads · 0 likes

A question answering model based on the RoBERTa architecture, trained specifically on the SQuAD 2.0 dataset and suited to English QA tasks.

## Bertin Base Stepwise
bertin-project · Large Language Model · Spanish · 15 downloads · 0 likes

A Spanish pre-trained model based on the RoBERTa architecture, specialized in masked language modeling tasks.
## Roberta2roberta L 24 Bbc
google · Apache-2.0 · Text Generation · Transformers, English · 959 downloads · 3 likes

An encoder-decoder model built from RoBERTa checkpoints, fine-tuned on the BBC XSum dataset for extreme summarization tasks.
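Because this is an encoder-decoder model for extreme summarization, it is used with a seq2seq generate call rather than a fill-mask or classification head. A minimal sketch, assuming the repository id below and that the checkpoint loads via AutoModelForSeq2SeqLM.

```python
# Minimal sketch: abstractive (extreme) summarization with a RoBERTa-based
# encoder-decoder model. The repository id is an assumption based on the listing.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "google/roberta2roberta_L-24_bbc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = (
    "The local council has approved plans for a new cycle path along the river, "
    "which campaigners say will make commuting safer and reduce traffic in the town centre."
)

inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```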
## Tunbert Zied
ziedsb19 · Large Language Model · Transformers · 19 downloads · 2 likes

tunbert_zied is a language model for the Tunisian dialect with an architecture similar to RoBERTa, trained on more than 600,000 Tunisian dialect phrases.

## Bertweet Covid19 Base Uncased
vinai · MIT · Large Language Model · 15 downloads · 2 likes

BERTweet is the first large-scale public language model pre-trained specifically for English tweets; it builds on the RoBERTa architecture and is optimized for social media text.
## Roberta Base Ca Finetuned Hate Speech Offensive Catalan
JonatanGk · Apache-2.0 · Text Classification · Transformers · 16 downloads · 1 like

A text classification model fine-tuned from the Catalan RoBERTa base model for detecting hate speech and offensive language in Catalan.
## Roberta Base Thai Char
KoichiYasuoka · Apache-2.0 · Large Language Model · Transformers, Other · 23 downloads · 0 likes

A RoBERTa model pre-trained on Thai Wikipedia text, using character-level embeddings so it works with BertTokenizerFast.

## Sinbert Large
NLPC-UOM · MIT · Large Language Model · Transformers, Other · 150 downloads · 6 likes

SinBERT is a Sinhala pre-trained language model based on the RoBERTa architecture, trained on a large Sinhala monolingual corpus (sin-cc-15M).

## Sinbert Small
NLPC-UOM · MIT · Large Language Model · Transformers, Other · 126 downloads · 4 likes

A smaller SinBERT variant pre-trained on the same large Sinhala monolingual corpus (sin-cc-15M), suitable for Sinhala text processing tasks.